30 research outputs found

    Capturing and reproducing realistic acoustic scenes for hearing research

    Evaluating the auralization of a small room in a virtual sound environment using objective room acoustic measures

    To study human auditory perception in realistic environments, loudspeaker-based reproduction techniques have recently become state-of-the-art. To evaluate the accuracy of a simulation-based room auralization of a small room, the following objective room acoustic measures were evaluated:
    - early decay time (EDT) and reverberation time (T20, T30)
    - clarity (C7, C50, C80)
    - interaural cross-correlation (IACC)
    - speech transmission index (STI)
    - direct-to-reverberant ratio (DRR)
    Impulse responses (IRs) were measured in an IEC listening room. The room was then modeled in the room acoustics software ODEON, and the same objective measures were evaluated for auralized versions of the playback room. The auralizations were realized using higher-order ambisonics (HOA), mixed-order ambisonics (MOA), and a nearest-loudspeaker method (NL), and reproduced in a virtual sound environment.
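    All of the measures listed above can be derived from a measured or simulated impulse response. As a rough illustration of how this is commonly done, the sketch below estimates decay times via Schroeder backward integration and clarity as an early-to-late energy ratio; the sample rate, the synthetic test IR, and the function names are illustrative assumptions, not the study's actual analysis code.

```python
# Minimal sketch: room acoustic measures from an impulse response.
# Hypothetical example; parameters and the test IR are illustrative only.
import numpy as np

fs = 48000  # sample rate in Hz (assumed)

def schroeder_edc(ir):
    """Energy decay curve in dB via backward integration (Schroeder, 1965)."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10.0 * np.log10(energy / energy[0])

def decay_time(ir, lo_db, hi_db):
    """Fit a line to the EDC between lo_db and hi_db, extrapolate to -60 dB."""
    edc = schroeder_edc(ir)
    t = np.arange(len(ir)) / fs
    mask = (edc <= lo_db) & (edc >= hi_db)
    slope, _ = np.polyfit(t[mask], edc[mask], 1)
    return -60.0 / slope  # seconds to decay by 60 dB

def clarity(ir, t_early_ms):
    """Early-to-late energy ratio in dB, e.g. C50 (t_early_ms=50) or C80."""
    n = int(round(t_early_ms * 1e-3 * fs))
    early = np.sum(ir[:n] ** 2)
    late = np.sum(ir[n:] ** 2)
    return 10.0 * np.log10(early / late)

# Example with a synthetic exponentially decaying noise "IR":
rng = np.random.default_rng(0)
ir = rng.standard_normal(fs) * np.exp(-np.arange(fs) / (0.3 * fs))
print(f"EDT ~ {decay_time(ir, 0.0, -10.0):.2f} s")   # fit 0 to -10 dB
print(f"T20 ~ {decay_time(ir, -5.0, -25.0):.2f} s")  # fit -5 to -25 dB
print(f"C50 ~ {clarity(ir, 50.0):.1f} dB")
```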

    Mixed-order Ambisonics recording and playback for improving horizontal directionality

    Planar (2D) and periphonic (3D) higher-order Ambisonics (HOA) systems are widely used to reproduce the spatial properties of acoustic scenarios. Mixed-order Ambisonics (MOA) systems combine the benefit of higher-order 2D systems, i.e., a high spatial resolution over a larger usable frequency bandwidth, with a lower-order 3D system to reproduce elevated sound sources. To record MOA signals, the locations of the microphones on a hard sphere were optimized to provide a robust MOA encoding. A detailed analysis of the encoding and decoding process showed that MOA can improve both the spatial resolution in the horizontal plane and the usable frequency bandwidth, for playback as well as recording. Hence, the described MOA scheme provides a promising method for improving the performance of current 3D sound reproduction systems.
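    To make the mixed-order idea concrete, the sketch below enumerates the spherical-harmonic components retained by a common MOA convention: the full set up to a lower periphonic order, plus the horizontal-only (sectoral) components up to a higher horizontal order. This is a generic illustration of MOA truncation under that assumed convention, not the specific microphone-placement optimization described in the paper.

```python
# Sketch of a mixed-order Ambisonics (MOA) channel set: keep every
# spherical-harmonic component up to a low periphonic order n3d, plus
# the horizontal (sectoral, |m| == n) components up to a higher
# horizontal order n2d. Illustrative convention, assumed for this example.

def moa_channels(n2d, n3d):
    """Return the (order n, degree m) pairs retained by an (n2d, n3d) MOA set."""
    channels = []
    for n in range(n2d + 1):
        for m in range(-n, n + 1):
            if n <= n3d or abs(m) == n:  # full set up to n3d, sectoral above
                channels.append((n, m))
    return channels

# Example: horizontal order 4, periphonic order 2.
chans = moa_channels(n2d=4, n3d=2)
print(len(chans), "channels:", chans)
# A full 3D HOA system of order 4 would need (4+1)^2 = 25 components;
# this MOA set achieves the same horizontal order with
# (2+1)^2 + 2*(4-2) = 13 components.
```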

    Sound source localization with varying amount of visual information in virtual reality

    To achieve accurate spatial auditory perception, subjects typically require personal head-related transfer functions (HRTFs) and the freedom for head movements. Loudspeaker-based virtual sound environments allow for realism without individualized measurements. To study audio-visual perception in realistic environments, the combination of spatially tracked head-mounted displays (HMDs), also known as virtual reality glasses, with virtual sound environments may be valuable. However, HMDs were recently shown to affect the subjects' HRTFs and thus might influence sound localization performance. Furthermore, due to limitations in the reproduction of visual information on the HMD, audio-visual perception might be affected. Here, a sound localization experiment was conducted both with and without an HMD and with varying amounts of visual information provided to the subjects. Furthermore, errors in interaural time and level differences (ITDs and ILDs) as well as spectral perturbations induced by the HMD were analyzed and compared to the perceptual localization data. The results showed a reduction in localization accuracy when the subjects were wearing an HMD and when they were blindfolded. The HMD-induced error in azimuth localization was found to be larger in the left than in the right hemisphere. When visual information about the limited set of source locations was provided, the localization error induced by the HMD was found to be negligible. Presenting visual information about hand location and room dimensions led to better sound localization performance than the condition with no visual information. Adding the set of possible source locations further improved the localization accuracy. Adding pointing feedback in the form of a virtual laser pointer improved the accuracy of elevation perception but not of azimuth perception.
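    As a rough illustration of the ITD/ILD analysis mentioned above, the sketch below estimates a broadband ITD from the peak of the interaural cross-correlation and a broadband ILD from the energy ratio of the two ears' impulse responses; the same functions could be applied to HRIRs measured with and without the HMD. The synthetic HRIRs, sample rate, and function names are assumptions for demonstration, not the study's code.

```python
# Sketch: broadband ITD and ILD from a pair of head-related impulse
# responses (HRIRs). Hypothetical example with synthetic placeholder HRIRs.
import numpy as np

fs = 48000  # sample rate in Hz (assumed)

def itd_from_hrirs(h_left, h_right):
    """ITD in seconds from the interaural cross-correlation peak;
    negative values mean the left ear leads."""
    xcorr = np.correlate(h_left, h_right, mode="full")
    lag = np.argmax(np.abs(xcorr)) - (len(h_right) - 1)
    return lag / fs

def ild_from_hrirs(h_left, h_right):
    """Broadband ILD in dB from the total energy of each ear's HRIR."""
    return 10.0 * np.log10(np.sum(h_left ** 2) / np.sum(h_right ** 2))

# Toy example: a source to the left arrives earlier and louder at the
# left ear (30 samples ~ 0.6 ms of delay, 6 dB of attenuation).
rng = np.random.default_rng(1)
pulse = rng.standard_normal(64) * np.hanning(64)
h_l = np.concatenate([np.zeros(10), pulse])        # earlier, full level
h_r = np.concatenate([np.zeros(40), 0.5 * pulse])  # later, -6 dB

print(f"ITD ~ {itd_from_hrirs(h_l, h_r) * 1e6:.0f} us")
print(f"ILD ~ {ild_from_hrirs(h_l, h_r):.1f} dB")
```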